
Philosophy Of Test Automation

The book has now been published and the content of this chapter has likely changed substantially.

About This Chapter

In the Goals of Test Automation narrative I described many of the goals and benefits of having an effective test automation program in place. This chapter introduces some differences in the way various people think about design, construction and testing that change the way they might naturally apply these patterns. The "big picture" questions include whether we write tests first or last, whether we think of them as tests or examples, whether we build the software from the inside-out or outside-in, whether we verify state or behavior and whether we design the fixture up front or test by test.

Why is Philosophy Important?

What's philosophy got to do with test automation? A lot! Our outlook on life (and testing) greatly affects how we go about automating tests. While discussing an early draft of this book with Martin Fowler (the series editor), we came to the conclusion that there were philosophical differences between how various people approached xUnit-based test automation. These differences were at the heart of why, for example, some people use Mock Objects (page X) sparingly and others use them everywhere.

Since that eye-opening discussion, I have kept an eye out for other philosophical differences. These tend to come up as a result of someone saying "I never (find a need to) use that pattern" or "I never run into that smell." I have discovered that by questioning these statements I learn a lot about the testing philosophy of the speaker. Out of these discussions have come the following philosophical differences:

Some Philosophical Differences

Test First or Last?

Traditional software development prepares and executes tests after all the software is designed and coded. This is true for both customer tests and unit tests. The agile community has made writing the tests first the standard way of doing things. Now, one might ask, "Why is this important?" Anyone who has tried to retrofit Fully Automated Tests (see Goals of Test Automation) onto a legacy system will tell you how much harder it is to write the tests after the fact. Even when writing automated unit tests after the software "is already finished" would be easy, mustering the discipline to do so is hard. And even if we design for testability, the likelihood that we can write the tests easily and naturally without modifying the production code is low. When tests are written first, the design of the system is inherently testable.

There are other advantages to writing the tests first. When tests are written first and we write only enough code to make the tests pass, the production code tends to be more minimalist. Functionality that is optional tends not to be written; no effort goes into fancy error handling code that doesn't work. The tests tend to be more robust because the right methods are provided on each object based on the tests' needs.

Access to the state of the object for the purposes of fixture setup and result verification comes much more naturally if the software is written "test first". For example, we may avoid the test smell Sensitive Equality (see Fragile Test on page X) entirely because assertions are made against the relevant attributes of objects rather than by comparing their string representations. We may even find that we don't need to implement toString at all because we have no real need for it. The ability to substitute dependencies with Test Doubles (page X) for the purpose of verifying the outcome is also greatly enhanced because substitutable dependencies are designed into the software from the start.
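As a concrete illustration, here is a minimal sketch in Java/JUnit (the Invoice class and its attributes are hypothetical, invented only for this example) contrasting a Sensitive Equality assertion against the string representation with assertions on the attributes the test actually cares about:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class InvoiceTest {

    // Hypothetical domain class, inlined only to keep the example self-contained.
    static class Invoice {
        private final String customerName;
        private final int lineItemCount;
        Invoice(String customerName, int lineItemCount) {
            this.customerName = customerName;
            this.lineItemCount = lineItemCount;
        }
        String getCustomerName() { return customerName; }
        int getLineItemCount() { return lineItemCount; }
        public String toString() {
            return "Invoice[customer=" + customerName + ", lineItems=" + lineItemCount + "]";
        }
    }

    // Sensitive Equality: any change to the toString() format breaks this test,
    // even when the invoice data is still correct.
    @Test
    public void brittleAssertionOnStringRepresentation() {
        Invoice invoice = new Invoice("ACME", 2);
        assertEquals("Invoice[customer=ACME, lineItems=2]", invoice.toString());
    }

    // Asserting on the attributes the test cares about; no toString() is needed at all.
    @Test
    public void assertionOnRelevantAttributes() {
        Invoice invoice = new Invoice("ACME", 2);
        assertEquals("ACME", invoice.getCustomerName());
        assertEquals(2, invoice.getLineItemCount());
    }
}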

Tests or Examples?

When I first mention the concept of writing automated tests for software before the software has been written, some listeners get strange looks on their faces. "How can you possibly write tests for software that doesn't exist?" In these cases I have followed Brian Marick's lead by reframing the discussion to talk about "examples" and example-driven development. It seems that examples are much easier for some people to envision writing before code than "tests". That the examples are executable and will tell you whether or not they have been successfully satisfied can be left for a later discussion with people who have a bit more imagination.

By the time this book is in your hands I fully expect to see a family of example-driven development frameworks. The Ruby-based RSpec kicked off the reframing of TDD to EDD and the Java-based JBehave followed shortly thereafter. The basic design of these "unit test frameworks" is the same as xUnit but the terminology has changed to reflect the Executable Specification (see Goals of Test Automation) mindset. Another popular alternative for specifying components that contain business logic is to use Fit tests. These will invariably be more readable by non-technical people than something written in a programming language, regardless of how "business friendly" we make the syntax!

Test-by-Test or All-At-Once?

The test-driven development process encourages us to "write a test" then "write some code" to pass that test. It isn't a case of all the tests being written before any code; rather, the writing of tests and code is interleaved in a very fine-grained way. "Test a bit, code a bit, test a bit more." This is incremental development at its finest grain. Is this the only way to do things? Not at all! Some developers prefer to identify all the tests needed by the current feature before starting any coding. This has the advantage of letting them "think like a client" or "think like a tester" and avoids being sucked into "solution mode" too early.

Test-driven purists argue that we can design more incrementally if we build the software one test at a time. "It's easier to stay focused if we only have a single test failing." Many test drivers report not using the debugger very much because the fine-grained testing and incremental development leave little doubt about why tests are failing; the tests provide Defect Localization (see Goals of Test Automation) and the last change we made (which caused the problem) is still fresh in our minds.

This is especially relevant when talking about unit tests because we can choose when to enumerate the detailed requirements (tests) of each object or method. A reasonable compromise is to identify all the unit tests at the beginning of a task (possibly roughing in empty Test Method (page X) bodies) but only coding a single test body at a time. We could also code all the Test Method bodies and then disable all but one of the tests so we can focus on building the production code one test at a time.
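One way to realize this compromise in xUnit looks something like the following Java/JUnit sketch (FlightBookingTest and the Flight class are hypothetical names): all the Test Methods for the task are enumerated up front, but only the current one has a real body, and the rest are roughed in and temporarily disabled.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import org.junit.Ignore;
import org.junit.Test;

public class FlightBookingTest {

    // The one test we are currently driving the production code with.
    @Test
    public void bookSeatOnFlightWithAvailableSeats() {
        Flight flight = new Flight(2);                    // capacity of two seats
        assertTrue(flight.bookSeat("passenger-1"));
        assertEquals(1, flight.seatsRemaining());
    }

    // Roughed-in Test Method bodies, disabled until their turn comes.
    @Ignore("enable and fill in once the previous test passes")
    @Test
    public void bookSeatOnFullFlightIsRejected() {
    }

    @Ignore("enable and fill in once the previous test passes")
    @Test
    public void cancellingABookingReleasesTheSeat() {
    }

    // Minimal hypothetical production class, just enough to satisfy the first test.
    static class Flight {
        private int seatsRemaining;
        Flight(int capacity) { this.seatsRemaining = capacity; }
        boolean bookSeat(String passengerId) {
            if (seatsRemaining == 0) return false;
            seatsRemaining--;
            return true;
        }
        int seatsRemaining() { return seatsRemaining; }
    }
}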

With customer tests, we probably don't want to feed the tests to the developers one by one within a user story, but it does make sense to prepare all the tests for a single story before development of that story starts. Some teams prefer to have the customer tests for a story identified before they estimate the effort to build it, since the tests help frame the story.

Outside-In or Inside-Out?

Designing the software from the outside inwards implies thinking first about black-box customer tests (a.k.a. "story tests") for the entire system and then thinking about unit tests for each piece of software we design. Along the way we may also implement component tests for the large-grained components we decide to build.

Each of these sets of tests causes us to "think like the client" well before we start thinking like a software developer. We focus first on the interface we provide to the user of the software whether it be a person or another piece of software. The tests capture these usage patterns and help us enumerate all the scenarios we need to support. Only when we have identified all the tests are we "finished" specifying.

Some people prefer to design outside-in but then code inside-out to avoid dealing with the "dependency problem". This requires anticipating the needs of the outer software when writing the tests for the inner software. It also means that we don't actually test the outer software in isolation from the inner software. The following diagram illustrates this. The top-to-bottom progression implies the order in which we write the software. Tests for the middle and lower classes can take advantage of the already-built classes above them. This avoids the need for Test Stubs (page X) or Mock Objects in many of the tests. We may still need them for tests in which the inner components could return specific values or throw exceptions but cannot be made to do so on demand. In such cases, a Saboteur (see Test Stub) comes in very handy.




Fig. X: "Inside-Out" Development of functionality.

Development starts with the innermost components and proceeds towards the user interface, building on the previously constructed components.
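To make the role of the Saboteur concrete, here is a minimal Java/JUnit sketch (all of the names, such as RateService and PriceCalculator, are hypothetical). The real inner component already exists, but it cannot be made to fail on demand, so a hand-coded Test Stub that always throws is substituted to exercise the outer component's error handling:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceCalculatorTest {

    // Interface of the inner component the outer class depends on.
    interface RateService {
        double exchangeRate(String currency) throws RateUnavailableException;
    }

    static class RateUnavailableException extends Exception {
    }

    // Outer component under test: falls back to the base price when no rate is available.
    static class PriceCalculator {
        private final RateService rates;
        PriceCalculator(RateService rates) { this.rates = rates; }
        double priceIn(String currency, double basePrice) {
            try {
                return basePrice * rates.exchangeRate(currency);
            } catch (RateUnavailableException e) {
                return basePrice;                 // degrade gracefully
            }
        }
    }

    @Test
    public void fallsBackToBasePriceWhenRateServiceFails() {
        // Saboteur: a Test Stub that always throws, regardless of its input.
        RateService saboteur = currency -> { throw new RateUnavailableException(); };
        PriceCalculator calculator = new PriceCalculator(saboteur);
        assertEquals(100.0, calculator.priceIn("EUR", 100.0), 0.001);
    }
}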

Other test drivers prefer to design and code outside-in. Writing the code outside-in forces us to deal with the "dependency problem". We can use Test Stubs to stand in for the software we haven't yet written so that the outer layer of software can be executed and tested. We can also use the Test Stubs to inject "impossible" indirect inputs (return values, out parameters or exceptions) into our system under test (SUT) to verify that it handles them correctly.

On the other hand, building from the inside out allows us to layer our SUT on top of the existing software and use Test Doubles sparingly, injecting only those indirect inputs that the inner layers of software do not return during normal usage (e.g., error scenarios where the lower layer of software returns error codes or throws exceptions).

In this next diagram, note how we have reversed the order in which we are building our classes. Because the subordinate classes don't exist yet, we have had to use Test Doubles to stand in for them.




Fig. X: "Outside-In" development of functionality supported by Test Doubles.

Development starts at the outside using Test Doubles in place of the depended-on components (DOCs) and proceeds inwards as requirements for each DOC are identified.

Once the subordinate classes have been built, we could remove the Test Doubles from many of the tests. Keeping them gives us better Defect Localization at the cost of potentially higher test maintenance.
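A small Java/JUnit sketch of the outside-in approach might look like this (OrderService, InventoryClient and the SKU values are hypothetical). The outer OrderService is written and tested first; a hand-coded Test Stub stands in for the not-yet-written InventoryClient and supplies whatever indirect inputs each test needs:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class OrderServiceTest {

    // Interface of the depended-on component (DOC) that has not been built yet.
    interface InventoryClient {
        int unitsInStock(String sku);
    }

    // Outer class under test, developed before its DOC exists.
    static class OrderService {
        private final InventoryClient inventory;
        OrderService(InventoryClient inventory) { this.inventory = inventory; }
        String placeOrder(String sku, int quantity) {
            return inventory.unitsInStock(sku) >= quantity ? "ACCEPTED" : "BACKORDERED";
        }
    }

    @Test
    public void orderIsBackorderedWhenStockIsInsufficient() {
        InventoryClient stub = sku -> 3;                  // canned indirect input
        assertEquals("BACKORDERED", new OrderService(stub).placeOrder("WIDGET-1", 5));
    }

    @Test
    public void orderIsAcceptedWhenStockIsSufficient() {
        InventoryClient stub = sku -> 10;
        assertEquals("ACCEPTED", new OrderService(stub).placeOrder("WIDGET-1", 5));
    }
}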

State or Behavior Verification?

From writing code outside-in it is a small step to verifying behavior rather than just state. The "statist" view is that it is sufficient to put the SUT into a specific state, exercise it, and verify that it is in the expected state at the end of the test. The "behaviorist" view is that we should specify not only the start and end state of an object but the calls it makes to its dependencies. That is, we should specify the details of the calls to the "outgoing interfaces" of the SUT. I call these "outgoing calls" the indirect outputs of the SUT because they are outputs just like the values returned by functions except that we must use special measures to trap them because they don't come directly back to the client or test.

The behaviorist school of thought is sometimes called behavior-driven development. It is evidenced by the copious use of Mock Objects or Test Spies (page X) throughout the tests. It does a better job of testing each unit of software in isolation, at the possible cost of more difficult refactoring. Martin Fowler provides a detailed discussion of the two approaches in [MAS].
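The difference is easiest to see side by side. In this Java/JUnit sketch (TransferService, Account and AuditTrail are hypothetical names), the first test uses state verification and checks only the end state of the two accounts, while the second uses a hand-rolled Test Spy to capture and verify the SUT's indirect output to its outgoing AuditTrail interface:

import static org.junit.Assert.assertEquals;
import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

public class TransferServiceTest {

    // Outgoing interface of the SUT; calls to it are the SUT's indirect outputs.
    interface AuditTrail {
        void recordTransfer(int amount);
    }

    static class Account {
        private int balance;
        Account(int balance) { this.balance = balance; }
        int balance() { return balance; }
        void adjust(int delta) { balance += delta; }
    }

    // The SUT: moves money and reports each transfer to the audit trail.
    static class TransferService {
        private final AuditTrail audit;
        TransferService(AuditTrail audit) { this.audit = audit; }
        void transfer(Account from, Account to, int amount) {
            from.adjust(-amount);
            to.adjust(amount);
            audit.recordTransfer(amount);                 // indirect output
        }
    }

    // State verification: only the resulting state of the accounts is checked.
    @Test
    public void transferMovesMoneyBetweenAccounts() {
        TransferService service = new TransferService(amount -> { /* ignored */ });
        Account from = new Account(100);
        Account to = new Account(0);
        service.transfer(from, to, 40);
        assertEquals(60, from.balance());
        assertEquals(40, to.balance());
    }

    // Behavior verification: a Test Spy captures the outgoing call so it can be asserted on.
    @Test
    public void transferIsRecordedOnTheAuditTrail() {
        List<Integer> recordedAmounts = new ArrayList<Integer>();
        AuditTrail spy = amount -> recordedAmounts.add(amount);
        new TransferService(spy).transfer(new Account(100), new Account(0), 40);
        assertEquals(1, recordedAmounts.size());
        assertEquals(40, (int) recordedAmounts.get(0));
    }
}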

Fixture Design Up Front or Test-by-Test?

In the traditional test community it is pretty common to define a "test bed" consisting of the application and a database with a variety of test data already populated. The content of the database is carefully designed to allow many different test scenarios to be exercised.

When the fixture for xUnit tests is approached in a similar manner, the test automater may define a Standard Fixture (page X) that is then used for all the Test Methods of one (or more) Testcase Classes (page X). This fixture may be set up as a Fresh Fixture (page X) in each Test Method using Delegated Setup (page X) or in the setUp method using Implicit Setup (page X). Or it can be set up as a Shared Fixture (page X). Either way, it makes it harder for the test reader to determine which parts of the fixture are truly pre-conditions for a particular Test Method.

The more agile approach is to custom design a Minimal Fixture (page X) for each test. There is no "big fixture design up front" activity. This approach is best implemented using a Fresh Fixture.
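As a sketch of the test-by-test approach (FlightFareTest and its Flight class are hypothetical), each Test Method in the following Java/JUnit example builds exactly the Minimal Fixture it needs through a small Delegated Setup creation method, so the relevant pre-conditions are visible right in the test:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class FlightFareTest {

    // Hypothetical production class, inlined to keep the example self-contained.
    static class Flight {
        private final int baseFareInCents;
        Flight(int baseFareInCents) { this.baseFareInCents = baseFareInCents; }
        int fareWithDiscount(int percent) {
            return baseFareInCents - (baseFareInCents * percent / 100);
        }
    }

    // Delegated Setup: each test asks for exactly the fixture it needs.
    private Flight createFlightWithBaseFare(int cents) {
        return new Flight(cents);
    }

    @Test
    public void tenPercentDiscountIsAppliedToTheBaseFare() {
        Flight flight = createFlightWithBaseFare(10000);  // minimal fixture for this test
        assertEquals(9000, flight.fareWithDiscount(10));
    }

    @Test
    public void zeroPercentDiscountLeavesTheFareUnchanged() {
        Flight flight = createFlightWithBaseFare(10000);
        assertEquals(10000, flight.fareWithDiscount(0));
    }
}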

When Philosophies Differ

We won't always get the people we work with to adopt our philosophy, but understanding that they subscribe to a different philosophy helps us understand why they do things differently. It's not that they don't share the same goals as ours (e.g., high-quality software, fit for purpose, on time, under budget); they just make decisions about how to achieve those goals based on a different philosophy. Understanding that different philosophies exist and which ones we each subscribe to is a good first step towards finding some common ground.

My Philosophy

In case you were wondering what my personal philosophy is, it is:

There! Now you know where I'm coming from.

What's Next?

This chapter introduced the different philosophies people bring to software design, construction, testing and test automation. In the Principles of Test Automation narrative chapter I describe key principles we should follow to help us achieve the goals described in the Goals of Test Automation narrative. That will set us up to start looking at our overall test automation strategy and the individual patterns.


